Processing of Kuala Lumpur Stock Exchange Resident on Hadoop MapReduce
Authors
Abstract
The Kuala Lumpur Stock Exchange (KLSE) generates big data that needs to be stored, processed, and analyzed as it trades day to day. Analyzing and finding similar components (stock market prices) may assist investors. However, it is not easy to find similar components in the KLSE, because the components change in the market every day. This paper focuses on using Hadoop MapReduce to store and process the KLSE big data, applying the k-means algorithm to perform the calculation, and then finding companies with similar KLSE closing-bid patterns, so that investors can predict a company's next closing bid based on another company with a similar trend. To assist investors, the similar trends among companies are shown on a Graphical User Interface (GUI). All of the storing, processing, and analyzing runs automatically behind the scenes of the GUI.
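The abstract names the building blocks but not the code, so the sketch below illustrates, under stated assumptions, what a single k-means iteration over the stored closing bids could look like as a Hadoop MapReduce job. It is not the authors' implementation: the input layout ("companyCode,bid1,...,bidN" per line), the "kmeans.centroids" job parameter, and all class names are assumptions made for illustration.

```java
// Sketch of one k-means iteration over KLSE closing-bid vectors on Hadoop MapReduce (not the
// paper's code): the input layout "companyCode,bid1,...,bidN" and the "kmeans.centroids" job
// parameter are assumptions made for illustration.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class KMeansIteration {

    /** Assigns each company's closing-bid vector to the nearest current centroid. */
    public static class AssignMapper extends Mapper<LongWritable, Text, IntWritable, Text> {
        private double[][] centroids;

        @Override
        protected void setup(Context context) {
            // Centroids arrive as "b1,b2,...;b1,b2,..." in the job configuration (an assumption).
            String[] rows = context.getConfiguration().get("kmeans.centroids").split(";");
            centroids = new double[rows.length][];
            for (int i = 0; i < rows.length; i++) {
                String[] parts = rows[i].split(",");
                centroids[i] = new double[parts.length];
                for (int j = 0; j < parts.length; j++) centroids[i][j] = Double.parseDouble(parts[j]);
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");   // fields[0] is the company code
            double[] bids = new double[fields.length - 1];
            for (int i = 1; i < fields.length; i++) bids[i - 1] = Double.parseDouble(fields[i]);
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int c = 0; c < centroids.length; c++) {
                double dist = 0;
                for (int j = 0; j < bids.length; j++) {
                    double diff = bids[j] - centroids[c][j];
                    dist += diff * diff;
                }
                if (dist < bestDist) { bestDist = dist; best = c; }
            }
            // Emit cluster id -> original record so the reducer can average the members.
            context.write(new IntWritable(best), value);
        }
    }

    /** Recomputes each centroid as the mean of the closing-bid vectors assigned to it. */
    public static class RecomputeReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
        @Override
        protected void reduce(IntWritable cluster, Iterable<Text> members, Context context)
                throws IOException, InterruptedException {
            double[] sum = null;
            long count = 0;
            for (Text member : members) {
                String[] fields = member.toString().split(",");
                if (sum == null) sum = new double[fields.length - 1];
                for (int i = 1; i < fields.length; i++) sum[i - 1] += Double.parseDouble(fields[i]);
                count++;
            }
            StringBuilder centroid = new StringBuilder();
            for (int j = 0; j < sum.length; j++) {
                if (j > 0) centroid.append(',');
                centroid.append(sum[j] / count);
            }
            context.write(cluster, new Text(centroid.toString()));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("kmeans.centroids", args[2]);   // previous iteration's centroids (assumption)
        Job job = Job.getInstance(conf, "klse-kmeans-iteration");
        job.setJarByClass(KMeansIteration.class);
        job.setMapperClass(AssignMapper.class);
        job.setReducerClass(RecomputeReducer.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

In such a setup, a driver would rerun this job, feeding each iteration's reducer output back as the next set of centroids until they stabilize, and a final pass would label every company with its cluster so the GUI can group companies that share a closing-bid pattern.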
Similar resources
Adaptive Dynamic Data Placement Algorithm for Hadoop in Heterogeneous Environments
The Hadoop MapReduce framework is an important distributed processing model for large-scale, data-intensive applications. The current Hadoop and the existing Hadoop Distributed File System rack-aware data placement strategy for MapReduce in a homogeneous Hadoop cluster assume that each node in the cluster has the same computing capacity and that the same workload is assigned to each node. Default Hadoop d...
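As a rough illustration of the idea behind capacity-aware placement in a heterogeneous cluster (not the cited paper's actual algorithm), the sketch below splits a file's data blocks across nodes in proportion to an assumed per-node computing ratio; the node names and ratios are made up for the example.

```java
// Illustrative sketch only: assign each node a share of data blocks proportional to a
// measured "computing ratio" instead of spreading blocks evenly. Node names and ratios
// are assumptions, not values from the cited paper.
import java.util.LinkedHashMap;
import java.util.Map;

public class CapacityAwarePlacement {

    /** Splits totalBlocks across nodes in proportion to each node's computing ratio. */
    static Map<String, Long> blockQuotas(Map<String, Double> computingRatios, long totalBlocks) {
        double totalRatio = computingRatios.values().stream().mapToDouble(Double::doubleValue).sum();
        Map<String, Long> quotas = new LinkedHashMap<>();
        long assigned = 0;
        for (Map.Entry<String, Double> node : computingRatios.entrySet()) {
            long quota = Math.round(totalBlocks * node.getValue() / totalRatio);
            quotas.put(node.getKey(), quota);
            assigned += quota;
        }
        // Put any rounding leftover on the fastest node so every block has a home.
        String fastest = computingRatios.entrySet().stream()
                .max(Map.Entry.comparingByValue()).get().getKey();
        quotas.merge(fastest, totalBlocks - assigned, Long::sum);
        return quotas;
    }

    public static void main(String[] args) {
        Map<String, Double> ratios = new LinkedHashMap<>();
        ratios.put("node-fast", 3.0);   // e.g. finishes a map task 3x faster than the baseline
        ratios.put("node-mid", 2.0);
        ratios.put("node-slow", 1.0);
        System.out.println(blockQuotas(ratios, 600));  // {node-fast=300, node-mid=200, node-slow=100}
    }
}
```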
Cloud Computing Technology Algorithms Capabilities in Managing and Processing Big Data in Business Organizations: MapReduce, Hadoop, Parallel Programming
The objective of this study is to verify the importance of the capabilities of cloud computing services in managing and analyzing big data in business organizations, because the rapid development in the use of information technology in general, and network technology in particular, has led many organizations to make their applications available for use via electronic platforms hos...
Hadoop Mapreduce Framework in Big Data Analytics
Hadoop is a large-scale, open-source software framework dedicated to scalable, distributed, data-intensive computing. Hadoop [1] MapReduce is a programming framework for easily writing applications which process vast amounts of data (multi-terabyte data sets) in parallel on large clusters (many nodes) of commodity hardware in a reliable, ...
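To make the programming model in this summary concrete, here is a minimal, self-contained MapReduce job in the same spirit (not code from the cited paper): the mapper emits (companyCode, closingBid) pairs from an assumed "date,companyCode,closingBid" record layout, and the reducer averages the bids per company.

```java
// Minimal MapReduce illustration (assumed input layout "date,companyCode,closingBid"):
// map emits (companyCode, closingBid); reduce averages the bids for each company.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AverageClosingBid {

    public static class BidMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");   // date, companyCode, closingBid
            context.write(new Text(fields[1]), new DoubleWritable(Double.parseDouble(fields[2])));
        }
    }

    public static class AverageReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text company, Iterable<DoubleWritable> bids, Context context)
                throws IOException, InterruptedException {
            double sum = 0;
            long count = 0;
            for (DoubleWritable bid : bids) { sum += bid.get(); count++; }
            context.write(company, new DoubleWritable(sum / count));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "average-closing-bid");
        job.setJarByClass(AverageClosingBid.class);
        job.setMapperClass(BidMapper.class);
        job.setReducerClass(AverageReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```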
Prediction of BSE Stock Data using MapReduce K-Mean Cluster Algorithm
Bombay Stock Exchange (BSE) Limited, established in 1875 as the Native Share and Stock Brokers' Association, is considered one of Asia's fastest stock exchanges and the oldest stock exchange in the South Asia region. On 31 August 1957, the BSE became the first stock exchange to be recognized by the Indian Government under the Securities Contracts (Regulation) Act, 1956. In this paper, we develope...
Developing Prediction Model for Stock Exchange Data Set Using Hadoop Map Reduce Technique
The stock market has high-profit and high-risk features, which is why its prediction must be close to accurate. The main issue with such data sets is that they are very complex nonlinear functions which can only be learnt by data mining methods to r...
Journal title:
Volume, Issue:
Pages: -
Publication date: 2014